chore: release v4.5.0-rc.0 (#3563)
github-actions[bot] wants to merge 1 commit into
Contributor
🚩 Remaining .server-changes file not cleaned up
The PR deletes 14 `.server-changes/*.md` files as part of this release, but `.server-changes/dev-cli-disconnect-md` (note: missing `.md` extension) remains on disk. This file appears to be a pre-existing artifact — its odd filename (no `.md` extension) suggests it may have been created incorrectly. It won't be picked up by tooling that expects `.md` files. This is not introduced by this PR, but the release cleanup pass could have caught it.
Summary
41 improvements, 1 bug fix.
Improvements
- AI Prompts (#3629): define versioned prompt templates as code with `prompts.define`, resolve them with typed variables via `prompt.resolve`, link every generation span back to its prompt with `toAISDKTelemetry()`, and manage versions and dashboard overrides through the `prompts.*` management API.
- `onBoot` hook for `chat.agent` (#3543): fires once per worker process picking up the chat, including preloaded and continuation runs, before any other hook.
- AI Agents (#3543): run AI SDK chat completions as durable Trigger.dev agents. `useTriggerChatTransport` plugs into `useChat`; includes resumable streams, lifecycle hooks, stop generation, steering, tool approvals, head start (`chat.headStart`), multi-tab support, and auto-reconnect.
- Sessions (#3542): a durable, run-aware stream channel keyed on a stable `externalId` that survives run boundaries; backs the new `chat.agent` runtime.
- `ai.toolExecute(task)` (#3546): wire a Trigger subtask in as the `execute` handler of an AI SDK `tool()`.
- `region` on the runs API (#3612): filter runs by region and read each run's executing region.
- Overlong `idempotencyKey` rejection (#3560): inputs are capped at 2048 characters and over-limit requests return a structured 400 instead of a generic 500.
- Retry `TASK_PROCESS_SIGSEGV` crashes (#3552): segfaults retry under the task's existing retry policy instead of failing the run on the first attempt.

The full per-package notes appear under Releases below.

Bug fixes
- Fix `LocalsKey<T>` type incompatibility across dual-package builds. The phantom value-type brand no longer uses a module-level `unique symbol`, so a single TypeScript compilation that resolves the type from both the ESM and CJS outputs (which can happen under certain pnpm hoisting layouts) no longer sees two structurally-incompatible variants of the same type. (#3626)

Raw changeset output
`main` is currently in pre mode so this branch has prereleases rather than normal releases. If you want to exit prereleases, run `changeset pre exit` on `main`.

Releases
@trigger.dev/sdk@4.5.0-rc.0
Minor Changes
- AI Prompts — define prompt templates as code alongside your tasks, version them on deploy, and override the text or model from the dashboard without redeploying. Prompts integrate with the Vercel AI SDK via `toAISDKTelemetry()` (links every generation span back to the prompt) and with `chat.agent` via `chat.prompt.set()` + `chat.toStreamTextOptions()`. (#3629)

  What you get:

  - `prompts.define({ id, model, config, variables, content })`. Every deploy creates a new version visible in the dashboard. Mustache-style placeholders (`{{var}}`, `{{#cond}}...{{/cond}}`) with Zod / ArkType / Valibot-typed variables.
  - `prompt.resolve(vars, { version?, label? })` returns the compiled `text`, resolved `model`, `version`, and labels. Standalone `prompts.resolve<typeof handle>(slug, vars)` for cross-file resolution with full type inference on slug and variable shape.
  - Spread `resolved.toAISDKTelemetry({ ...extra })` into any `generateText`/`streamText` call and every generation span links to the prompt in the dashboard alongside its input variables, model, tokens, and cost.
  - `chat.agent` integration: `chat.prompt.set(resolved)` stores the resolved prompt run-scoped; `chat.toStreamTextOptions({ registry })` pulls `system`, `model` (resolved via the AI SDK provider registry), `temperature`/`maxTokens`/etc., and telemetry into a single spread for `streamText`.
  - Management API: `prompts.list()`, `prompts.versions(slug)`, `prompts.promote(slug, version)`, `prompts.createOverride(slug, body)`, `prompts.updateOverride(slug, body)`, `prompts.removeOverride(slug)`, `prompts.reactivateOverride(slug, version)`.

  See /docs/ai/prompts for the full reference — template syntax, version resolution order, override workflow, and type utilities (`PromptHandle`, `PromptIdentifier`, `PromptVariables`).
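The placeholder semantics above can be sketched with a tiny interpolator. This is only an illustration of the described `{{var}}` / `{{#cond}}...{{/cond}}` syntax, not the SDK's actual renderer:

```typescript
// Minimal sketch of the Mustache-style semantics described above, NOT the
// SDK's implementation. `{{var}}` interpolates; `{{#cond}}...{{/cond}}`
// keeps its body only when `vars[cond]` is truthy.
function renderTemplate(template: string, vars: Record<string, unknown>): string {
  // Conditional sections first, so their bodies can contain plain placeholders.
  const withSections = template.replace(
    /\{\{#(\w+)\}\}([\s\S]*?)\{\{\/\1\}\}/g,
    (_m, name: string, body: string) => (vars[name] ? body : "")
  );
  // Then simple interpolation; missing variables render as empty strings.
  return withSections.replace(/\{\{(\w+)\}\}/g, (_m, name: string) =>
    vars[name] === undefined ? "" : String(vars[name])
  );
}
```

For example, `renderTemplate("Hi {{name}}.{{#vip}} Welcome back!{{/vip}}", { name: "Ada", vip: true })` yields `"Hi Ada. Welcome back!"`.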
- Adds `onBoot` to `chat.agent` — a lifecycle hook that fires once per worker process picking up the chat. Runs for the initial run, preloaded runs, AND reactive continuation runs (post-cancel, crash, `endRun`, `requestUpgrade`, OOM retry), before any other hook. Use it to initialize `chat.local`, open per-process resources, or re-hydrate state from your DB on continuation — anywhere the SAME run picking up after suspend/resume isn't enough. (#3543)

  Use `onBoot` (not `onChatStart`) for state setup that must run every time a worker picks up the chat — `onChatStart` fires once per chat and won't run on continuation, leaving `chat.local` uninitialized when `run()` tries to use it.

- AI Agents — run AI SDK chat completions as durable Trigger.dev agents instead of fragile API routes. Define an agent in one function, point `useChat` at it from React, and the conversation survives page refreshes, network blips, and process restarts. (#3543)

  What you get:
  - `useChat` integration: a custom `ChatTransport` (`useTriggerChatTransport`) plugs straight into Vercel AI SDK's `useChat` hook. Text streaming, tool calls, reasoning, and `data-*` parts all work natively over Trigger.dev's realtime streams. No custom API routes needed.
  - Head start (`chat.headStart`): opt-in handler that runs the first turn's `streamText` step in your warm server process while the agent run boots in parallel, cutting cold-start TTFC by roughly half (measured 2801ms → 1218ms on `claude-sonnet-4-6`). The agent owns step 2+ (tool execution, persistence, hooks) so heavy deps stay where they belong. Web Fetch handler works natively in Next.js, Hono, SvelteKit, Remix, Workers, etc.; bridge to Express/Fastify/Koa via `chat.toNodeListener`. New `@trigger.dev/sdk/chat-server` subpath.
  - Resumable streams: `resume: true` reconnects via `lastEventId` so clients only see new chunks. `sessions.list` enumerates chats for inbox-style UIs. Implement `hydrateMessages` to be the source of truth yourself.
  - Lifecycle hooks: `onPreload`, `onChatStart`, `onValidateMessages`, `hydrateMessages`, `onTurnStart`, `onBeforeTurnComplete`, `onTurnComplete`, `onChatSuspend`, `onChatResume` — for persistence, validation, and post-turn work.
  - Stop generation: `transport.stopGeneration(chatId)` aborts mid-stream; the run stays alive for the next message, partial response is captured, and aborted parts (stuck `partial-call` tools, in-progress reasoning) are auto-cleaned.
  - Tool approvals: tools marked `needsApproval: true` pause until the user approves or denies via `addToolApprovalResponse`. The runtime reconciles the updated assistant message by ID and continues `streamText`.
  - Steering: `pendingMessages` injects user messages between tool-call steps so users can steer the agent mid-execution; `chat.inject()` + `chat.defer()` adds context from background work (self-review, RAG, safety checks) between turns.
  - Actions: `transport.sendAction`. Fires `hydrateMessages` + `onAction` only — no turn hooks, no `run()`. `onAction` can return a `StreamTextResult` for a model response, or `void` for side-effect-only.
  - State: `chat.local<T>` for per-run state accessible from hooks, `run()`, tools, and subtasks (auto-serialized through `ai.toolExecute`); `chat.store` for typed shared data between agent and client; `chat.history` for reading and mutating the message chain; `clientDataSchema` for typed `clientData` in every hook.
  - `chat.toStreamTextOptions()`: one spread into `streamText` wires up versioned system Prompts, model resolution, telemetry metadata, compaction, steering, and background injection.
  - Multi-tab: `multiTab: true` + `useMultiTabChat` prevents duplicate sends and syncs state across browser tabs via `BroadcastChannel`. Non-active tabs go read-only with live updates.
  - Auto-reconnect: `online` / tab refocus / bfcache restore, `Last-Event-ID` mid-stream resume. No app code needed.

  See /docs/ai-chat for the full surface — quick start, three backend approaches (`chat.agent`, `chat.createSession`, raw task), persistence and code-sandbox patterns, type-level guides, and API reference.
- Add read primitives to `chat.history` for HITL flows: `getPendingToolCalls()`, `getResolvedToolCalls()`, `extractNewToolResults(message)`, `getChain()`, and `findMessage(messageId)`. These lift the accumulator-walking logic that customers building human-in-the-loop tools were re-implementing into the SDK. (#3543)

  Use `getPendingToolCalls()` to gate fresh user turns while a tool call is awaiting an answer. Use `extractNewToolResults(message)` to dedup tool results when persisting to your own store — the helper returns only the parts whose `toolCallId` is not already resolved on the chain.
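The dedup rule described for `extractNewToolResults` can be sketched as a filter over message parts. The part shape below is an assumption for illustration, not the SDK's actual types:

```typescript
// Sketch of the dedup rule: keep only tool-result parts whose toolCallId is
// not already resolved on the chain. Assumed part shape, not the SDK's.
type ToolResultPart = { type: "tool-result"; toolCallId: string; output: unknown };
type MessagePart = ToolResultPart | { type: string };

function extractNewToolResultsSketch(
  parts: MessagePart[],
  resolvedToolCallIds: Set<string> // toolCallIds already resolved on the chain
): ToolResultPart[] {
  return parts.filter(
    (p): p is ToolResultPart =>
      p.type === "tool-result" &&
      !resolvedToolCallIds.has((p as ToolResultPart).toolCallId)
  );
}
```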
- Sessions — a durable, run-aware stream channel keyed on a stable `externalId`. A Session is the unit of state that owns a multi-run conversation: messages flow through `.in`, responses through `.out`, both survive run boundaries. Sessions back the new `chat.agent` runtime, and you can build on them directly for any pattern that needs durable bi-directional streaming across runs. (#3542)

  See /docs/ai-chat/overview for the full surface — Sessions powers the durable, resumable chat runtime described there.
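To make the `.in` / `.out` flow concrete, here is a toy in-memory analogue of a session keyed on `externalId`. The real primitive is durable and server-backed, so this is purely illustrative:

```typescript
// Toy in-memory analogue of the Session shape described above: messages flow
// through `.in`, responses through `.out`. The real primitive is durable.
class ToySession {
  readonly in: string[] = [];
  readonly out: string[] = [];
  constructor(readonly externalId: string) {}
}

const sessions = new Map<string, ToySession>();

// Keying on a stable externalId makes lookups idempotent across "runs":
// every caller with the same id sees the same channel state.
function getSession(externalId: string): ToySession {
  let s = sessions.get(externalId);
  if (!s) {
    s = new ToySession(externalId);
    sessions.set(externalId, s);
  }
  return s;
}
```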
Patch Changes
- Add Agent Skills for `chat.agent`. Drop a folder with a `SKILL.md` and any helper scripts/references next to your task code, register it with `skills.define({ id, path })`, and the CLI bundles it into the deploy image automatically — no `trigger.config.ts` changes. The agent gets a one-line summary in its system prompt and discovers full instructions on demand via `loadSkill`, with `bash` and `readFile` tools scoped per-skill (path-traversal guards, output caps, abort-signal propagation). (#3543)

  Built on the AI SDK cookbook pattern — portable across providers. SDK + CLI only for now; dashboard-editable `SKILL.md` text is on the roadmap.
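A minimal sketch of the kind of path-traversal guard mentioned for the per-skill `readFile` tool (an assumption about the approach, not the SDK's actual code):

```typescript
import * as path from "node:path";

// Sketch of a per-skill path-traversal guard: every requested path must
// resolve to a location inside the skill's folder. Illustrative only.
function resolveInsideSkill(skillRoot: string, requested: string): string {
  const root = path.resolve(skillRoot);
  const resolved = path.resolve(root, requested);
  // Reject anything that escapes the skill folder, including `../` tricks.
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`path escapes skill root: ${requested}`);
  }
  return resolved;
}
```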
- Add `ai.toolExecute(task)` so you can wire a Trigger subtask in as the `execute` handler of an AI SDK `tool()` while defining `description` and `inputSchema` yourself — useful when you want full control over the tool surface and just need Trigger's subtask machinery for the body. (#3546)

  `ai.tool(task)` (`toolFromTask`) keeps doing the all-in-one wrap and now aligns its return type with AI SDK's `ToolSet`. Minimum `ai` peer raised to `^6.0.116` to avoid cross-version `ToolSet` mismatches in monorepos.
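Conceptually, `ai.toolExecute(task)` adapts a subtask into a plain async `execute` handler. A rough sketch under an assumed result shape (`triggerAndWait` mirrors Trigger.dev's subtask API, but the details here are illustrative):

```typescript
// Hypothetical sketch of adapting a subtask into an `execute` handler of the
// shape AI SDK `tool()` expects. The result shape below is an assumption.
type SubtaskLike<In, Out> = {
  triggerAndWait: (payload: In) => Promise<{ ok: boolean; output?: Out }>;
};

function toolExecuteSketch<In, Out>(task: SubtaskLike<In, Out>) {
  // Returns an async (input) => output function; failures become throws,
  // which the tool-calling runtime can surface as tool errors.
  return async (input: In): Promise<Out> => {
    const result = await task.triggerAndWait(input);
    if (!result.ok || result.output === undefined) {
      throw new Error("subtask failed");
    }
    return result.output;
  };
}
```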
- Stamp `gen_ai.conversation.id` (the chat id) on every span and metric emitted from inside a `chat.task` or `chat.agent` run. Lets you filter dashboard spans, runs, and metrics by the chat conversation that produced them — independent of the run boundary, so multi-run chats correlate cleanly. No code changes required on the user side. (#3543)
- Unit-test `chat.agent` definitions offline with `mockChatAgent` from `@trigger.dev/sdk/ai/test`. Drives a real agent's turn loop in-process — no network, no task runtime — so you can send messages, actions, and stop signals via driver methods, inspect captured output chunks, and verify hooks fire. Pairs with `MockLanguageModelV3` from `ai/test` for model mocking. `setupLocals` lets you pre-seed `locals` (DB clients, service stubs) before `run()` starts. (#3543)

  The broader `runInMockTaskContext` harness it's built on lives at `@trigger.dev/core/v3/test` — useful for unit-testing any task code, not just chat.
- Add `region` to the runs list / retrieve API: filter runs by region (`runs.list({ region: "..." })` / `filter[region]=<masterQueue>`) and read each run's executing region from the new `region` field on the response. (#3612)

Updated dependencies:
- @trigger.dev/core@4.5.0-rc.0

@trigger.dev/build@4.5.0-rc.0
Patch Changes
- Add Agent Skills for `chat.agent`. Drop a folder with a `SKILL.md` and any helper scripts/references next to your task code, register it with `skills.define({ id, path })`, and the CLI bundles it into the deploy image automatically — no `trigger.config.ts` changes. The agent gets a one-line summary in its system prompt and discovers full instructions on demand via `loadSkill`, with `bash` and `readFile` tools scoped per-skill (path-traversal guards, output caps, abort-signal propagation). (#3543)

  Built on the AI SDK cookbook pattern — portable across providers. SDK + CLI only for now; dashboard-editable `SKILL.md` text is on the roadmap.

Updated dependencies:
- @trigger.dev/core@4.5.0-rc.0

trigger.dev@4.5.0-rc.0
Patch Changes
- Add Agent Skills for `chat.agent`. Drop a folder with a `SKILL.md` and any helper scripts/references next to your task code, register it with `skills.define({ id, path })`, and the CLI bundles it into the deploy image automatically — no `trigger.config.ts` changes. The agent gets a one-line summary in its system prompt and discovers full instructions on demand via `loadSkill`, with `bash` and `readFile` tools scoped per-skill (path-traversal guards, output caps, abort-signal propagation). (#3543)

  Built on the AI SDK cookbook pattern — portable across providers. SDK + CLI only for now; dashboard-editable `SKILL.md` text is on the roadmap.
- The CLI MCP server's agent-chat tools (`start_agent_chat`, `send_agent_message`, `close_agent_chat`) now run on the new Sessions primitive, so AI assistants driving a `chat.agent` get the same idempotent-by-chatId, durable-across-runs behavior the browser transport gets. Required PAT scopes go from `write:inputStreams` to `read:sessions` + `write:sessions`. (#3546)
- MCP `list_runs` tool: add a `region` filter input and surface each run's executing region in the formatted summary. (#3612)

Updated dependencies:
- @trigger.dev/core@4.5.0-rc.0
- @trigger.dev/build@4.5.0-rc.0
- @trigger.dev/schema-to-json@4.5.0-rc.0

@trigger.dev/core@4.5.0-rc.0
Patch Changes
- Add Agent Skills for `chat.agent`. Drop a folder with a `SKILL.md` and any helper scripts/references next to your task code, register it with `skills.define({ id, path })`, and the CLI bundles it into the deploy image automatically — no `trigger.config.ts` changes. The agent gets a one-line summary in its system prompt and discovers full instructions on demand via `loadSkill`, with `bash` and `readFile` tools scoped per-skill (path-traversal guards, output caps, abort-signal propagation). (#3543)

  Built on the AI SDK cookbook pattern — portable across providers. SDK + CLI only for now; dashboard-editable `SKILL.md` text is on the roadmap.
- Reject overlong `idempotencyKey` values at the API boundary so they no longer trip an internal size limit on the underlying unique index and surface as a generic 500. Inputs are capped at 2048 characters — well above what `idempotencyKeys.create()` produces (a 64-character hash) and above any realistic raw key. Applies to `tasks.trigger`, `tasks.batchTrigger`, `batch.create` (Phase 1 streaming batches), `wait.createToken`, `wait.forDuration`, and the input/session stream waitpoint endpoints. Over-limit requests now return a structured 400 instead. (#3560)
- AI Agents — run AI SDK chat completions as durable Trigger.dev agents instead of fragile API routes. Define an agent in one function, point `useChat` at it from React, and the conversation survives page refreshes, network blips, and process restarts. (#3543)

  What you get:
  - `useChat` integration: a custom `ChatTransport` (`useTriggerChatTransport`) plugs straight into Vercel AI SDK's `useChat` hook. Text streaming, tool calls, reasoning, and `data-*` parts all work natively over Trigger.dev's realtime streams. No custom API routes needed.
  - Head start (`chat.headStart`): opt-in handler that runs the first turn's `streamText` step in your warm server process while the agent run boots in parallel, cutting cold-start TTFC by roughly half (measured 2801ms → 1218ms on `claude-sonnet-4-6`). The agent owns step 2+ (tool execution, persistence, hooks) so heavy deps stay where they belong. Web Fetch handler works natively in Next.js, Hono, SvelteKit, Remix, Workers, etc.; bridge to Express/Fastify/Koa via `chat.toNodeListener`. New `@trigger.dev/sdk/chat-server` subpath.
  - Resumable streams: `resume: true` reconnects via `lastEventId` so clients only see new chunks. `sessions.list` enumerates chats for inbox-style UIs. Implement `hydrateMessages` to be the source of truth yourself.
  - Lifecycle hooks: `onPreload`, `onChatStart`, `onValidateMessages`, `hydrateMessages`, `onTurnStart`, `onBeforeTurnComplete`, `onTurnComplete`, `onChatSuspend`, `onChatResume` — for persistence, validation, and post-turn work.
  - Stop generation: `transport.stopGeneration(chatId)` aborts mid-stream; the run stays alive for the next message, partial response is captured, and aborted parts (stuck `partial-call` tools, in-progress reasoning) are auto-cleaned.
  - Tool approvals: tools marked `needsApproval: true` pause until the user approves or denies via `addToolApprovalResponse`. The runtime reconciles the updated assistant message by ID and continues `streamText`.
  - Steering: `pendingMessages` injects user messages between tool-call steps so users can steer the agent mid-execution; `chat.inject()` + `chat.defer()` adds context from background work (self-review, RAG, safety checks) between turns.
  - Actions: `transport.sendAction`. Fires `hydrateMessages` + `onAction` only — no turn hooks, no `run()`. `onAction` can return a `StreamTextResult` for a model response, or `void` for side-effect-only.
  - State: `chat.local<T>` for per-run state accessible from hooks, `run()`, tools, and subtasks (auto-serialized through `ai.toolExecute`); `chat.store` for typed shared data between agent and client; `chat.history` for reading and mutating the message chain; `clientDataSchema` for typed `clientData` in every hook.
  - `chat.toStreamTextOptions()`: one spread into `streamText` wires up versioned system Prompts, model resolution, telemetry metadata, compaction, steering, and background injection.
  - Multi-tab: `multiTab: true` + `useMultiTabChat` prevents duplicate sends and syncs state across browser tabs via `BroadcastChannel`. Non-active tabs go read-only with live updates.
  - Auto-reconnect: `online` / tab refocus / bfcache restore, `Last-Event-ID` mid-stream resume. No app code needed.

  See /docs/ai-chat for the full surface — quick start, three backend approaches (`chat.agent`, `chat.createSession`, raw task), persistence and code-sandbox patterns, type-level guides, and API reference.
- Stamp `gen_ai.conversation.id` (the chat id) on every span and metric emitted from inside a `chat.task` or `chat.agent` run. Lets you filter dashboard spans, runs, and metrics by the chat conversation that produced them — independent of the run boundary, so multi-run chats correlate cleanly. No code changes required on the user side. (#3543)
- Fix `LocalsKey<T>` type incompatibility across dual-package builds. The phantom value-type brand no longer uses a module-level `unique symbol`, so a single TypeScript compilation that resolves the type from both the ESM and CJS outputs (which can happen under certain pnpm hoisting layouts) no longer sees two structurally-incompatible variants of the same type. (#3626)
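The branding change can be sketched as follows; the names are illustrative, not the SDK's actual definitions:

```typescript
// Sketch of the idea behind the fix: brand the key with a structural string
// literal instead of a module-level `unique symbol`. Structural types stay
// mutually assignable even when ESM and CJS declaration files are both
// resolved in one compilation; a `unique symbol` brand is nominal per
// declaration file and would produce two incompatible types.
type LocalsKeySketch<T> = {
  readonly id: string;
  // Phantom type parameter carried structurally -- never set at runtime.
  readonly __type?: T;
  readonly __brand?: "LocalsKey";
};

function createLocalsKeySketch<T>(id: string): LocalsKeySketch<T> {
  return { id };
}
```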
- Unit-test `chat.agent` definitions offline with `mockChatAgent` from `@trigger.dev/sdk/ai/test`. Drives a real agent's turn loop in-process — no network, no task runtime — so you can send messages, actions, and stop signals via driver methods, inspect captured output chunks, and verify hooks fire. Pairs with `MockLanguageModelV3` from `ai/test` for model mocking. `setupLocals` lets you pre-seed `locals` (DB clients, service stubs) before `run()` starts. (#3543)

  The broader `runInMockTaskContext` harness it's built on lives at `@trigger.dev/core/v3/test` — useful for unit-testing any task code, not just chat.
- Retry `TASK_PROCESS_SIGSEGV` task crashes under the user's retry policy instead of failing the run on the first segfault. SIGSEGV in Node tasks is frequently non-deterministic (native addon races, JIT/GC interaction, near-OOM in native code, host issues), so retrying on a fresh process often succeeds. The retry is gated by the task's existing `retry` config + `maxAttempts` — same path `TASK_PROCESS_SIGTERM` and uncaught exceptions already use — so tasks without a retry policy still fail fast. (#3552)
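The gating decision can be sketched as a small predicate (field names illustrative, not the engine's actual types):

```typescript
// Sketch of the gating described above: a segfault retries only if the
// task's own retry config allows another attempt -- the same decision shape
// used for SIGTERM and uncaught exceptions.
type RetryConfig = { maxAttempts: number } | undefined;

function shouldRetrySegfault(retry: RetryConfig, attemptNumber: number): boolean {
  // No retry policy configured: fail fast on the first SIGSEGV.
  if (!retry) return false;
  return attemptNumber < retry.maxAttempts;
}
```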
- Add `region` to the runs list / retrieve API: filter runs by region (`runs.list({ region: "..." })` / `filter[region]=<masterQueue>`) and read each run's executing region from the new `region` field on the response. (#3612)
- Sessions — a durable, run-aware stream channel keyed on a stable `externalId`. A Session is the unit of state that owns a multi-run conversation: messages flow through `.in`, responses through `.out`, both survive run boundaries. Sessions back the new `chat.agent` runtime, and you can build on them directly for any pattern that needs durable bi-directional streaming across runs. (#3542)

  See /docs/ai-chat/overview for the full surface — Sessions powers the durable, resumable chat runtime described there.
@trigger.dev/plugins@4.5.0-rc.0
Patch Changes
- @trigger.dev/core@4.5.0-rc.0

@trigger.dev/python@4.5.0-rc.0
Patch Changes
- @trigger.dev/sdk@4.5.0-rc.0
- @trigger.dev/core@4.5.0-rc.0
- @trigger.dev/build@4.5.0-rc.0

@trigger.dev/react-hooks@4.5.0-rc.0
Patch Changes
- @trigger.dev/core@4.5.0-rc.0

@trigger.dev/redis-worker@4.5.0-rc.0
Patch Changes
- @trigger.dev/core@4.5.0-rc.0

@trigger.dev/rsc@4.5.0-rc.0
Patch Changes
- @trigger.dev/core@4.5.0-rc.0

@trigger.dev/schema-to-json@4.5.0-rc.0
Patch Changes
- @trigger.dev/core@4.5.0-rc.0